Learning Long-Horizon Robot Exploration Strategies for Multi-object Search in Continuous Action Spaces
Authors
Abstract
Recent advances in vision-based navigation and exploration have shown impressive capabilities in photorealistic indoor environments. However, these methods still struggle with long-horizon tasks and require large amounts of data to generalize to unseen environments. In this work, we present a novel reinforcement learning approach for multi-object search that combines short-term and long-term reasoning in a single model while avoiding the complexities arising from hierarchical structures. In contrast to existing methods that act in granular discrete action spaces, our approach achieves exceptional performance in continuous action spaces. We perform extensive experiments and show that it generalizes to unseen apartment environments with limited data. Furthermore, we demonstrate zero-shot transfer of the learned policies to an office environment in real world experiments.
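The abstract's key design point is acting directly in a continuous action space rather than a discrete one. A minimal sketch of what such a policy head can look like is below, using a Gaussian distribution over a two-dimensional velocity command squashed into bounds with tanh. All names, shapes, and parameters here are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def policy_head(features, W_mu, b_mu, log_std):
    """Map encoder features to one sampled continuous action in [-1, 1]^2.

    The action could be interpreted as (linear velocity, angular velocity)
    for a mobile robot; this mapping is a hypothetical example.
    """
    mu = features @ W_mu + b_mu                 # state-dependent action mean
    std = np.exp(log_std)                       # state-independent std (learned in practice)
    raw = mu + std * rng.standard_normal(mu.shape)
    return np.tanh(raw)                         # squash sample into action bounds

feat = rng.standard_normal(8)                   # hypothetical encoder output
W_mu = 0.1 * rng.standard_normal((8, 2))        # illustrative random weights
b_mu = np.zeros(2)
action = policy_head(feat, W_mu, b_mu, log_std=np.full(2, -1.0))
print(action.shape)                             # (2,)
```

A discrete-action agent would instead pick from a fixed set of motion primitives; the continuous head above lets gradient-based RL adjust velocities smoothly.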
Similar resources
Safe Exploration in Continuous Action Spaces
We address the problem of deploying a reinforcement learning (RL) agent on a physical system such as a datacenter cooling unit or robot, where critical constraints must never be violated. We show how to exploit the typically smooth dynamics of these systems and enable RL algorithms to never violate constraints during learning. Our technique is to directly add to the policy a safety layer that a...
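The safety-layer idea described above can be sketched as an analytic projection: if a linearized constraint model predicts that a proposed action would violate a constraint, the action is replaced by the closest action on the constraint boundary. The linear model (g, c) would normally be learned from system data; the values below are made up for illustration.

```python
import numpy as np

def safety_layer(action, g, c):
    """Project `action` onto the halfspace {a : g @ a + c <= 0}.

    Sketch of a single linear safety constraint; g (gradient) and c
    (offset) are assumed given by a learned linear constraint model.
    """
    violation = g @ action + c
    if violation <= 0.0:
        return action                               # already safe: pass through
    return action - (violation / (g @ g)) * g       # closed-form Euclidean projection

a = np.array([1.0, 0.5])                            # proposed (unsafe) action
g = np.array([1.0, 0.0])                            # assumed constraint gradient
c = -0.2                                            # assumed constraint offset
safe_a = safety_layer(a, g, c)
print(safe_a)                                       # [0.2 0.5]
```

After projection the constraint holds with equality (g @ safe_a + c == 0), so the correction is minimal in the Euclidean sense.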
Performance Analysis for Multi-robot Exploration Strategies
In this note, we compare four different exploration strategies and analyze the performance in terms of exploration time and amount of exploration per time step. In order to provide a suitable reference for comparison, we derive an upper bound for the theoretically achievable increase of explored area per time step. The comparison itself is done using a comprehensive empirical evaluation resulti...
Multi-resolution Exploration in Continuous Spaces
The essence of exploration is acting to try to decrease uncertainty. We propose a new methodology for representing uncertainty in continuous-state control problems. Our approach, multi-resolution exploration (MRE), uses a hierarchical mapping to identify regions of the state space that would benefit from additional samples. We demonstrate MRE’s broad utility by using it to speed up learning in ...
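The hierarchical mapping MRE describes can be sketched with a simple adaptive tree over a 1-D state interval: each cell counts visits and splits once it has enough samples, so cell width shrinks where data is dense and stays coarse (high uncertainty) elsewhere. The structure and split threshold below are illustrative assumptions, not the paper's algorithm.

```python
class MRECell:
    """One node of a binary resolution tree over the interval [lo, hi)."""

    def __init__(self, lo, hi, split_at=4):
        self.lo, self.hi = lo, hi
        self.count, self.split_at = 0, split_at
        self.children = None

    def visit(self, x):
        self.count += 1
        mid = 0.5 * (self.lo + self.hi)
        if self.children is None and self.count >= self.split_at:
            # Enough samples: refine this region into two finer cells.
            self.children = [MRECell(self.lo, mid, self.split_at),
                             MRECell(mid, self.hi, self.split_at)]
        if self.children is not None:
            self.children[0 if x < mid else 1].visit(x)

    def resolution_at(self, x):
        """Width of the finest cell covering x -- a crude uncertainty proxy."""
        if self.children is None:
            return self.hi - self.lo
        mid = 0.5 * (self.lo + self.hi)
        return self.children[0 if x < mid else 1].resolution_at(x)

root = MRECell(0.0, 1.0)
for x in [0.1, 0.12, 0.15, 0.11, 0.13, 0.14, 0.12]:  # samples clustered near 0.1
    root.visit(x)
# The densely sampled region is refined; the unvisited region stays coarse.
print(root.resolution_at(0.1), root.resolution_at(0.9))
```

An exploration bonus could then be made proportional to the local cell width, directing the agent toward coarse (under-sampled) regions.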
Robot Learning: Exploration and Continuous Domains
The goal of this workshop was to discuss two major issues: efficient exploration of a learner's state space, and learning in continuous domains. The common themes that emerged in presentations and in discussion were the importance of choosing one's domain assumptions carefully, mixing controllers/strategies, avoidance of catastrophic failure, new approaches with difficulties with reinforcement ...
Fast Reinforcement Learning in Continuous Action Spaces
A new fast algorithm for reinforcement learning (RL) in continuous state and action spaces is proposed. Unlike algorithms based on dynamic programming, the proposed algorithm uses neither temporal nor spatial differencing. Instead, it couples the solution of the Hamilton-Jacobi-Bellman (HJB) partial differential equation with the structure of the function approximators that are commonly used toge...
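For context, the HJB equation mentioned above is the continuous-time counterpart of the Bellman optimality equation. In a standard discounted formulation with dynamics $\dot{x} = f(x, u)$, reward $r(x, u)$, and discount time constant $\tau$, it reads

$$
\frac{1}{\tau}\, V^*(x) \;=\; \max_{u}\Big[\, r(x, u) \;+\; \nabla_x V^*(x)^{\top} f(x, u) \,\Big],
$$

where $V^*$ is the optimal value function. This is the textbook form; the cited paper's exact formulation may differ in discounting or boundary conditions.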
Journal
Journal title: Springer Proceedings in Advanced Robotics
Year: 2023
ISSN: 2511-1256, 2511-1264
DOI: https://doi.org/10.1007/978-3-031-25555-7_5